Zero and non-zero sum risk-sensitive Semi-Markov games

Abstract

In this article, we consider zero-sum and non-zero-sum risk-sensitive average criterion games for semi-Markov processes with a finite state space. For the zero-sum case, under suitable assumptions we show that the game has a value, and we establish the existence of a stationary saddle-point equilibrium. For the non-zero-sum case, we establish the existence of a stationary Nash equilibrium.
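
For context, the risk-sensitive average criterion referred to above usually takes the following multiplicative (exponential-of-integral) form; this is the standard textbook formulation rather than a quotation from the paper, and the symbols are illustrative:

J_\theta(x, \pi^1, \pi^2) = \limsup_{T \to \infty} \frac{1}{\theta T} \ln \mathbb{E}_x^{\pi^1, \pi^2} \left[ \exp\left( \theta \int_0^T c(X_t, A^1_t, A^2_t) \, dt \right) \right]

Here \theta > 0 is the risk-sensitivity parameter, c is a running cost, X_t is the semi-Markov state process, and A^1_t, A^2_t are the players' actions under strategies \pi^1, \pi^2. In the zero-sum game, player 1 minimizes J_\theta and player 2 maximizes it; the game has a value when \inf_{\pi^1} \sup_{\pi^2} J_\theta = \sup_{\pi^2} \inf_{\pi^1} J_\theta.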

Similar Articles

Non-Zero Sum Games for Reactive Synthesis

In this invited contribution [7], we summarize new solution concepts, introduced in several of our recent publications, that are useful for the synthesis of reactive systems. These solution concepts are developed in the context of non-zero-sum games played on graphs. They are part of the contributions of the inVEST project, funded by the European Research Council.

Non-zero-sum Dresher inspection games

Here, an inspection game is a non-cooperative two-person game between an inspector and an inspectee. It models a situation in which the inspector controls the inspectee, who has an incentive to violate certain legal obligations. A recent survey of inspection games applied to data verification, for example in nuclear material safeguards, is given in [1]. Dresher [2] described a sequential inspection ...

Value Function Approximation in Zero-Sum Markov Games

This paper investigates value function approximation in the context of zero-sum Markov games, which can be viewed as a generalization of the Markov decision process (MDP) framework to the two-agent case. We generalize error bounds from MDPs to Markov games and describe generalizations of reinforcement learning algorithms to Markov games. We present a generalization of the optimal stopping probl...
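
As a rough illustration of the MDP-to-Markov-game generalization described above (a minimal sketch under simplifying assumptions, not the paper's algorithm): Shapley-style value iteration replaces the MDP's max over actions with the value of a zero-sum matrix game at each state, which is computable by linear programming.

import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M for the row (maximizing) player, via LP."""
    m, n = M.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                # maximize v  <=>  minimize -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])   # v <= x . M[:, j] for every column j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                           # the mixed strategy x sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)], method="highs")
    return res.x[-1]

def shapley_value_iteration(R, P, gamma=0.9, iters=200):
    """R[s]: m x n payoff matrix at state s; P[s]: m x n x S next-state distributions."""
    V = np.zeros(len(R))
    for _ in range(iters):
        V = np.array([matrix_game_value(R[s] + gamma * (P[s] @ V))
                      for s in range(len(R))])
    return V

For instance, a single-state game whose stage payoff is matching pennies has value 0 under this iteration: shapley_value_iteration([np.array([[1., -1.], [-1., 1.]])], [np.ones((2, 2, 1))]).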

Sampling Techniques for Zero-sum, Discounted Markov Games

In this paper, we first present a key approximation result for zero-sum, discounted Markov games, providing bounds on the state-wise loss and the loss in the sup norm resulting from using approximate Q-functions. Then we extend the policy rollout technique for MDPs to Markov games. Using our key approximation result, we prove that, under certain conditions, the rollout technique gives rise to a...
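
The rollout idea can be sketched as follows (an illustrative sketch only, not the paper's exact procedure; env_step and base_pi are hypothetical helpers): estimate Q(s, a, b) for each joint action by Monte Carlo simulation of a fixed base policy pair, then act at s according to the matrix game on those estimates, e.g., via the LP in the previous sketch.

import numpy as np

def mc_q_estimate(env_step, base_pi, s, a, b, gamma=0.9, n=50, horizon=40, rng=None):
    """Monte Carlo estimate of the discounted Q-value of joint action (a, b) at state s,
    with both players following the base policy pair thereafter."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n):
        s2, r = env_step(s, a, b, rng)       # hypothetical simulator: one transition + reward
        ret, disc = r, gamma
        for _ in range(horizon):
            a2, b2 = base_pi(s2, rng)        # hypothetical base policy pair
            s2, r = env_step(s2, a2, b2, rng)
            ret += disc * r
            disc *= gamma
        total += ret
    return total / n

def rollout_q_matrix(env_step, base_pi, s, n_a, n_b, **kw):
    """Sampled Q-matrix at state s; the rollout player then plays the optimal mixed
    strategy of this matrix game (solvable with the LP from the previous sketch)."""
    return np.array([[mc_q_estimate(env_step, base_pi, s, a, b, **kw)
                      for b in range(n_b)] for a in range(n_a)])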

Journal

Journal title: Stochastic Analysis and Applications

Year: 2021

ISSN: 1532-9356, 0736-2994

DOI: https://doi.org/10.1080/07362994.2021.1993447